Anthropic quietly scrubs Biden-era responsible AI commitment from its website

Anthropic appears to have removed Biden-era commitments to creating safe AI from its website. 

Originally flagged by an AI watchdog called The Midas Project, the language was removed last week from Anthropic’s transparency hub, where the company lists its “voluntary commitments” related to responsible AI development. Though not binding, the deleted language promised to share information and research about AI risks, including bias, with the government. 

Alongside other big tech companies — including OpenAI, Google, and Meta — Anthropic joined the voluntary agreement to self-regulate in July 2023 as part of the Biden administration’s AI safety initiatives, many of which were later codified in Biden’s AI executive order. The companies committed to certain standards for security testing models before release, watermarking AI-generated content, and developing data privacy infrastructure. 

Anthropic later agreed to work with the US AI Safety Institute (created under that order) to carry out many of the same priorities. However, the Trump administration is expected to dissolve the Institute, leaving its initiatives in limbo. 

Anthropic did not publicly announce the removal of the commitments from its site, and maintains that its existing positions on responsible AI are unrelated to, or predate, the Biden-era agreements. 

The move is the latest in a series of public- and private-sector developments around AI — many of which impact the future of AI safety and regulation — under the Trump administration. 

On his first day in office, Trump reversed Biden's AI executive order, and his administration has since fired several AI experts within the government and cut some research funding. These changes appear to have kicked off a tonal shift among major AI companies, some of which are taking the opportunity to expand their government contracts and help shape a still-unclear AI policy under Trump. Companies like Google, for example, are loosening their already-broad definitions of responsible AI. 

Overall, the government has lost, or is slated to lose, much of the already-slim AI regulation created under Biden, and companies now have even fewer external incentives to place checks on their systems or answer to a third party. So far, safety checks for bias and discrimination have not appeared in Trump's communications on AI.
